Update service-openai.asciidoc #125419
Conversation
Many customers want to use our OpenAI inference endpoint against OpenAI-compatible APIs they have written themselves, or against Ollama or the NVIDIA Triton OpenAI API front end. I had heard that this was the intent of the OpenAI inference endpoint, but we do not state it directly. Can we validate that this is OK with the Search PM and include it?
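For context, this is the kind of configuration the proposed wording would cover: a minimal sketch (not taken from the PR itself) that points the `openai` service at a local Ollama instance via the service's optional `url` service setting. The endpoint name `ollama-completion`, the model `llama3`, and the dummy API key are placeholder assumptions; Ollama serves an OpenAI-compatible API on port 11434 by default.

```sh
# Hedged sketch: create an Elasticsearch inference endpoint backed by a local
# Ollama server instead of api.openai.com. Names and the API key are placeholders.
curl -X PUT "http://localhost:9200/_inference/completion/ollama-completion" \
  -H "Content-Type: application/json" \
  -d '{
    "service": "openai",
    "service_settings": {
      "api_key": "ignored-by-ollama",
      "model_id": "llama3",
      "url": "http://localhost:11434/v1/chat/completions"
    }
  }'
```

Note that the `api_key` field is still required by the `openai` service configuration even though an OpenAI-compatible backend such as Ollama may ignore it.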
Documentation preview:
@bradquarry please enable the option "Allow edits and access to secrets by maintainers" on your PR. For more information, see the documentation.
I don't see any button to allow edits by maintainers, unfortunately.
@szabosteve I assigned this to you. This probably needs to be updated in the v9 docs as well. @serenachou @aznick are you ok with this change?
Pinging @elastic/es-docs (Team:Docs) |
Confirmed that this change is okay. Refer to this Slack thread for further info. |
LGTM, thank you!
💔 Backport failed. You can use sqren/backport to manually backport by running:
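(The command itself was truncated from the comment above. As an illustration only, a typical sqren/backport invocation for this pull request might look like the following; the `--pr` flag is the tool's documented way to select which pull request to backport, and this is an assumption, not the original command.)

```sh
# Hypothetical example, not the truncated original command:
npx backport --pr 125419
```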
* Update service-openai.asciidoc (#125419)

  Many customers want to use our OpenAI inference endpoint against OpenAI-compatible APIs they have written themselves, or against Ollama or the NVIDIA Triton OpenAI API front end. I had heard that this was the intent of the OpenAI inference endpoint, but we do not state it directly. Can we validate that this is OK with the Search PM and include it?

  Co-authored-by: István Zoltán Szabó <[email protected]>

* Update docs/reference/inference/service-openai.asciidoc

---------

Co-authored-by: Brad Quarry <[email protected]>
gradle check